24 - Pattern Recognition (PR) [ID:2681]

The following content has been provided by the University of Erlangen-Nürnberg.

Okay, so welcome back to pattern recognition, everybody. So far we have talked about a lot of topics, and I heard you didn't get a summary of the topics of this course last week, so we're going to do a summary right now. Does anybody remember what we have been talking about so far? So let's start. We were talking about pattern recognition, right? And let's start with this cloud that you are used to drawing or looking at.

So what are we talking about in pattern recognition? Well, pattern recognition deals with assigning a feature observation, a feature vector x, to a certain class y. So what we want to find is a decision rule, and this decision rule tells us how to assign a vector x to a class y. Okay, so this is pattern recognition. Now we're done, right? Yeah. Okay. Maybe if you are in the oral exam, this is probably not the whole story; you will probably not get a very good score if you say this is pattern recognition and we're done here. Okay.
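Written out, the decision rule mentioned here is simply a mapping from feature vectors to class labels. Using delta as a generic name for the rule (the symbol is chosen here for illustration and may differ from the notation on the board), for d-dimensional feature vectors and K classes it reads:

    \delta : \mathbb{R}^{d} \to \{1, \dots, K\}, \qquad \hat{y} = \delta(x)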

So we heard about this, and on to the next topic. I'm not summarizing all the topics that we talked about and that you talked about; I'm just picking out certain examples right now. So the next thing, which was one of the many important topics that you talked about, is the Bayes classifier. What's the Bayes classifier? Do you remember anything about the Bayes classifier? Yes? Yes. And what else can you tell about it? Exactly. It's optimal with respect to the zero-one cost function. What does the zero-one cost function mean? Yeah. Okay. So a misclassification costs one and a correct classification doesn't cost anything. Okay.
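As a reminder of what optimality with respect to the zero-one cost function means, the cost (written here as l, a symbol chosen for illustration) and the resulting decision rule can be written as:

    l(y, \hat{y}) = \begin{cases} 0 & \text{if } \hat{y} = y \\ 1 & \text{otherwise} \end{cases}, \qquad \delta^{*}(x) = \arg\max_{y} \, p(y \mid x)

In words, minimizing the expected zero-one cost amounts to picking the class with the highest posterior probability.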

So when speaking about the Bayes classifier, it would also be nice to remember what the Bayes rule actually was. Who remembers the Bayes rule? Yes? So this is something with conditional probabilities, right? So if we have some probability p(A | B), the Bayes rule tells us how we can compute that. Yes? It's the joint probability divided by the probability of B. Yes. And you can also rewrite the joint probability as a conditional one: the probability of B given A, multiplied with the probability of A, right? And the probability of B in the denominator can be written as the conditional probability of B given A times the probability of A, summed up over A. So this is basically the joint probability here, and then you divide by the probability of B, right? Okay. Good. So you remember this.
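Collected in one line, the rule just reconstructed verbally reads (using A' as the summation index to keep it apart from the A in the numerator):

    p(A \mid B) = \frac{p(A, B)}{p(B)} = \frac{p(B \mid A)\, p(A)}{\sum_{A'} p(B \mid A')\, p(A')}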

Then we were talking about posterior PDF modelling, right? And what was the thing here that we were talking about? Well, we had two different ways of modelling classes, right? So one way we could model a class was that we have the probability of an observation x given some parametrization theta for a class k, that is, p(x | theta_k). And the other one, well, this is really an annoying pen, anyway, the other one is modelling the probability of the class directly, right? So what's the probability of the class? Let's say this is the probability of the class k given the observation x, that is, p(k | x). Okay. And do you remember which two kinds of models we get from that? So if I start modelling the distribution of x given a class k, this somehow describes how a certain class is distributed, right? And the other distribution describes, given an observation x, how you can determine the probability for a certain class.
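In symbols, the two modelling options are the following; the labels in parentheses are the usual names for these two views and are added here as a gloss rather than quoted from the recording:

    p(x \mid \theta_k) \quad \text{(class-conditional / generative: how observations of class } k \text{ are distributed)}
    p(k \mid x) \quad \text{(posterior / discriminative: the class probability given the observation)}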

And if you think about that, let's just make a short excursion here. So let's say we have two classes, and we plot the probability over x, so the probability on this axis and x on that axis. Then, for example, if there is a Gaussian distribution, we could find one class that is distributed like this, so this is class one, and then we have some other class that is distributed like this, and this is class two. And now these two curves are basically our probabilities of x given the class k, these are the two. And you've seen that if you now want to determine the probability of the class y given the observation, sorry, we just had the class k, so the probability of the class k given x, you can do that by describing the boundary between those two. In fact, if you set this up, you can see that the denominator collects the complete distribution mass, so you can actually solve this: you can rewrite p(k | x) as p(k) times the probability of x given k, divided by the probability of x, right? And the probability of x can be rewritten as the marginal where we sum over all the classes, so we have the sum over the classes of p(k) times p(x | k). Maybe I shouldn't call the summation index k again.
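To make this short excursion concrete, here is a minimal numerical sketch of exactly that computation for two classes with 1-D Gaussian class-conditional densities. All means, variances, and priors below are made up purely for illustration; they are not values from the lecture.

    import math

    # Hypothetical class-conditional densities p(x | k): two 1-D Gaussians.
    # Means, variances, and priors are invented for illustration only.
    params = {
        1: {"mean": 0.0, "var": 1.0, "prior": 0.6},  # class 1
        2: {"mean": 3.0, "var": 1.5, "prior": 0.4},  # class 2
    }

    def gaussian_pdf(x, mean, var):
        """Density of a 1-D Gaussian N(mean, var) evaluated at x."""
        return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

    def posterior(x):
        """Compute p(k | x) = p(k) p(x | k) / sum over all classes of p(k') p(x | k')."""
        joint = {k: p["prior"] * gaussian_pdf(x, p["mean"], p["var"]) for k, p in params.items()}
        evidence = sum(joint.values())  # p(x), the marginal over all classes
        return {k: j / evidence for k, j in joint.items()}

    def bayes_decision(x):
        """Bayes classifier under the zero-one cost: pick the class with maximal posterior."""
        post = posterior(x)
        return max(post, key=post.get)

    print(posterior(1.2))       # posterior probabilities of class 1 and class 2 at x = 1.2
    print(bayes_decision(1.2))  # the class with the larger posterior

Evaluating posterior(x) at different x values shows how the class probabilities shift from class 1 to class 2 as x moves from one Gaussian toward the other, which is exactly the boundary behaviour sketched on the board.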

Part of a video series:
Access: open access
Duration: 01:07:58 min
Recording date: 2013-01-15
Uploaded on: 2013-01-16 11:23:27
Language: en-US
